clip-02-05.jl

Load the Julia packages (libraries) needed for the snippets in chapter 3:

using StatisticalRethinking, Optim
gr(size=(600,300));

snippet 3.2

Define a grid of 1001 steps:

p_grid = range(0, step=0.001, stop=1);

Set all prior values to 1.0:
prior = ones(length(p_grid));

Compute the Binomial likelihood (pdf) of observing 6 successes in 9 tosses at each grid point:
likelihood = [pdf(Binomial(9, p), 6) for p in p_grid];

As a uniform prior has been used, the unstandardized posterior is equal to the likelihood:
posterior = likelihood .* prior;

Scale the posterior so the values sum to 1 and become probabilities:
posterior = posterior / sum(posterior);

snippet 3.3
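As a sanity check before sampling, the grid approximation above can be reproduced in a few lines of plain Python (an illustrative sketch using only the standard library, not the package code):

```python
from math import comb

# Grid approximation for 6 waters in 9 tosses, mirroring the Julia snippet above.
p_grid = [i / 1000 for i in range(1001)]              # 1001-point grid on [0, 1]
prior = [1.0] * len(p_grid)                           # flat prior
likelihood = [comb(9, 6) * p**6 * (1 - p)**3 for p in p_grid]
posterior = [l * pr for l, pr in zip(likelihood, prior)]
total = sum(posterior)
posterior = [v / total for v in posterior]            # normalize to probabilities

# The grid point with the highest posterior sits at 6/9 ~ 0.667
p_max = p_grid[max(range(len(posterior)), key=posterior.__getitem__)]
```

The normalized values sum to 1 and peak near p = 2/3, as expected for 6 successes in 9 trials.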
Sample using the computed posterior values as weights:

N = 10000
samples = sample(p_grid, Weights(posterior), N);

In StatisticalRethinkingJulia, samples will always be stored in an MCMCChains.Chains object:
chn = MCMCChains.Chains(reshape(samples, N, 1, 1), ["toss"]);

Object of type Chains, with data of type 10000×1×1 Array{Float64,3}

Describe the chain:
describe(chn)

Log evidence      = 0.0
Iterations        = 1:10000
Thinning interval = 1
Chains            = 1
Samples per chain = 10000
parameters        = toss

Empirical Posterior Estimates
────────────────────────────────────────
          Mean      SD   Naive SE     MCSE     ESS
toss    0.6354  0.1385     0.0014   0.0013   10000

Quantiles
────────────────────────────────────────
         2.5%   25.0%   50.0%   75.0%   97.5%
toss    0.138   0.541   0.644   0.738   0.983

Plot the chain:
plot(chn)
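Because the prior is flat, the exact posterior is the conjugate Beta(w+1, n-w+1) = Beta(7, 4), so the chain summary can be sanity-checked against the analytic mean and standard deviation (a Python sketch, independent of the Julia code):

```python
from math import sqrt

# Analytic moments of the Beta(7, 4) posterior (6 waters in 9 tosses, flat prior).
a, b = 7, 4
mean = a / (a + b)                               # ~ 0.6364
sd = sqrt(a * b / ((a + b)**2 * (a + b + 1)))    # ~ 0.1389
```

Both values are within sampling error of the chain estimates (0.6354 and 0.1385).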
Compute the MAP (maximum a posteriori) estimate:

x0 = [0.5]
lower = [0.0]
upper = [1.0]

function loglik(x)
  ll = 0.0
  # Log prior: flat Beta(1, 1), so this term is zero everywhere on (0, 1)
  ll += log.(pdf.(Beta(1, 1), x[1]))
  # Log likelihood of 6 successes in 9 tosses, repeated N times
  ll += sum(log.(pdf.(Binomial(9, x[1]), repeat([6], N))))
  # Return the negative log posterior, since Optim minimizes
  -ll
end
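Note that loglik repeats the single observation N times, so the reported minimum is N times the per-observation negative log likelihood. A quick Python check of that value (assuming N = 10000 as above):

```python
from math import comb, log

# Negative log posterior at the MAP p = 2/3, scaled by N = 10000,
# as in the loglik objective above (the flat prior contributes log 1 = 0).
N = 10_000
p = 2 / 3
nll_one = -(log(comb(9, 6)) + 6 * log(p) + 3 * log(1 - p))   # ~ 1.29781
min_value = N * nll_one                                      # ~ 1.29781e4
```

This matches the Minimum of 1.297811e+04 reported by the optimizer below.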
(qmap, opt) = quap(samples, loglik, lower, upper, x0)

([0.666667], Results of Optimization Algorithm
 * Algorithm: Fminbox with Gradient Descent
 * Starting Point: [0.5]
 * Minimizer: [0.6666666666544907]
 * Minimum: 1.297811e+04
 * Iterations: 4
 * Convergence: true
   * |x - x'| ≤ 0.0e+00: false
     |x - x'| = 5.74e-11
   * |f(x) - f(x')| ≤ 0.0e+00 |f(x)|: true
     |f(x) - f(x')| = 0.00e+00 |f(x)|
   * |g(x)| ≤ 1.0e-08: false
     |g(x)| = 1.80e-06
   * Stopped by an increasing objective: false
   * Reached Maximum Number of Iterations: false
 * Objective Calls: 108
 * Gradient Calls: 108)

Show the optimization results:
opt

Fit the quadratic approximation:
quapfit = [qmap[1], std(samples, mean=qmap[1])]

2-element Array{Float64,1}:
 0.6666666666544907
 0.14197159247653945

snippet 3.4
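Before plotting, the optimizer's result can be derived by hand: with a flat prior the log posterior is l(p) = 6 log p + 3 log(1 − p), and its maximum and curvature give a closed-form quadratic approximation (an illustrative Python check, not the package's quap implementation):

```python
from math import sqrt

# Closed-form MAP and curvature-based quadratic approximation.
# l(p)  = 6*log(p) + 3*log(1 - p)              (flat prior drops out)
# l'(p) = 6/p - 3/(1 - p) = 0  =>  p = 6/9 = 2/3, matching the minimizer above.
p_map = 6 / 9
# l''(p) = -6/p**2 - 3/(1 - p)**2; the quadratic approximation sd is 1/sqrt(-l'')
curvature = -6 / p_map**2 - 3 / (1 - p_map)**2   # = -40.5
sd_quap = 1 / sqrt(-curvature)                   # ~ 0.157
```

This curvature-based sd (≈ 0.157) differs slightly from the sample-based value (≈ 0.142) used in quapfit above; both are reasonable width estimates for the quadratic approximation.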
Create a vector to hold the plots so we can later combine them:

p = Vector{Plots.Plot{Plots.GRBackend}}(undef, 2)
p[1] = scatter(1:N, samples, markersize=2, ylim=(0.0, 1.3), lab="Draws")
snippet 3.5

Analytical calculation:

w = 6
n = 9
x = 0:0.01:1
p[2] = density(samples, ylim=(0.0, 5.0), lab="Sample density")
p[2] = plot!(x, pdf.(Beta(w+1, n-w+1), x), lab="Conjugate solution")

Add the quadratic approximation:

plot!(p[2], x, pdf.(Normal(quapfit[1], quapfit[2]), x), lab="Quap approximation")
plot(p..., layout=(1, 2))
End of clip-02-05.jl

This page was generated using Literate.jl.